Results 1-20 of 15,398
1.
Sci Rep ; 14(1): 10491, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714729

ABSTRACT

Dogs (Canis lupus familiaris) are the domestically bred descendants of wolves (Canis lupus). However, selective breeding has profoundly altered the facial morphologies of dogs compared to their wolf ancestors. We demonstrate that these morphological differences limit the ability of dogs to successfully produce the same affective facial expressions as wolves. We decoded facial movements of captive wolves during social interactions involving nine separate affective states. We used linear discriminant analyses to predict affective states based on combinations of facial movements. The resulting confusion matrix demonstrates that specific combinations of facial movements predict nine distinct affective states in wolves; this is the first assessment of this many affective facial expressions in wolves. However, comparative analyses with kennelled rescue dogs revealed a reduced ability to predict affective states. Critically, predictive power was very low for specific affective states, with confusion occurring between negative and positive states, such as Friendly and Fear. We show that the varying facial morphologies of dogs (specifically non-wolf-like morphologies) limit their ability to produce the same range of affective facial expressions as wolves. Confusion between positive and negative states could be detrimental to human-dog interactions, although our analyses also suggest dogs likely use vocalisations to compensate for limitations in facial communication.
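To make the analysis pipeline concrete, here is a minimal, hypothetical sketch of the LDA-plus-confusion-matrix approach described above, using scikit-learn on synthetic data; the state names beyond Friendly and Fear, the feature counts, and the sample sizes are illustrative assumptions, not the authors' data or code.

```python
# Hypothetical sketch: predicting affective states from coded facial-movement
# features with linear discriminant analysis, summarized as a confusion matrix.
# Synthetic data; "Anger", feature counts, and sample sizes are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
states = ["Friendly", "Fear", "Anger"]        # subset of the nine affective states
n_per_state, n_movements = 40, 20             # observations per state, coded movements
X = np.vstack([rng.normal(loc=i, size=(n_per_state, n_movements))
               for i in range(len(states))])
y = np.repeat(states, n_per_state)

# Cross-validated predictions avoid evaluating the model on its training data.
y_pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
print(confusion_matrix(y, y_pred, labels=states))
```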


Subject(s)
Domestication, Emotions, Facial Expression, Wolves, Animals, Wolves/physiology, Dogs, Emotions/physiology, Male, Female, Behavior, Animal/physiology, Humans
2.
Sci Rep ; 14(1): 10607, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719866

ABSTRACT

Guilt is a negative emotion elicited by realizing one has caused actual or perceived harm to another person. One of guilt's primary functions is to signal that one is aware of the harm that was caused and regrets it, an indication that the harm will not be repeated. Verbal expressions of guilt are often deemed insufficient by observers when not accompanied by nonverbal signals such as facial expression, gesture, posture, or gaze. Some research has investigated isolated nonverbal expressions of guilt; however, none to date has explored multiple nonverbal channels simultaneously. This study explored facial expression, gesture, posture, and gaze during the real-time experience of guilt when response demands were minimal. Healthy adults completed a novel task involving watching videos designed to elicit guilt, as well as comparison emotions. During the video task, participants were continuously recorded to capture nonverbal behaviour, which was then analyzed via automated facial expression software. We found that while feeling guilt, individuals engaged less in several nonverbal behaviours than they did while experiencing the comparison emotions. This may reflect the highly social aspect of guilt, suggesting that an audience is required to prompt a guilt display, or may suggest that guilt does not have clear nonverbal correlates.


Subject(s)
Facial Expression, Guilt, Humans, Male, Female, Adult, Young Adult, Nonverbal Communication/psychology, Emotions/physiology, Gestures
3.
PLoS One ; 19(5): e0302782, 2024.
Article in English | MEDLINE | ID: mdl-38713700

ABSTRACT

Parents with a history of childhood maltreatment may be more likely to respond inadequately to their child's emotional cues, such as crying or screaming, due to previous exposure to prolonged stress. While studies have investigated parents' physiological reactions to their children's vocal expressions of emotions, less attention has been given to their responses when perceiving children's facial expressions of emotions. The present study aimed to determine whether viewing facial expressions of emotions in children induces cardiovascular changes in mothers (hypo- or hyper-arousal) and whether these differ as a function of childhood maltreatment. A total of 104 mothers took part in this study. Their experiences of childhood maltreatment were measured using the Childhood Trauma Questionnaire (CTQ). Participants' electrocardiogram signals were recorded during a task in which they viewed a landscape video (baseline) and images of children's faces expressing different intensities of emotion. Heart rate variability (HRV) was extracted from the recordings as an indicator of parasympathetic reactivity. Participants fell into two profiles: one group of mothers showed decreased HRV when presented with images of children's facial expressions of emotions, while the other group's HRV increased. However, the magnitude of HRV change was not significantly different between the two groups. The interaction between HRV group and the severity of maltreatment experienced was marginal. Results suggested that experiences of childhood emotional abuse were more common among mothers whose HRV increased during the task. Therefore, more severe childhood experiences of emotional abuse could be associated with cardiovascular hyperreactivity in mothers. Maladaptive cardiovascular responses could have a ripple effect, influencing how mothers react to their children's facial expressions of emotions, which in turn could affect the quality of their interactions with their child. Providing interventions that help parents regulate their physiological and behavioral responses to stress might be helpful, especially for those who have experienced childhood maltreatment.
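The abstract does not name the HRV metric that was extracted; as one plausible illustration, the sketch below computes RMSSD (a common time-domain parasympathetic index) from already-detected R-peak times and expresses reactivity as the change from baseline to task. The peak times are invented.

```python
# Hypothetical sketch: RMSSD as a time-domain HRV index (illustrative only;
# the study does not specify which HRV metric was used). R-peak times in
# seconds are assumed to be already extracted from the ECG.
import numpy as np

def rmssd(r_peak_times_s):
    """Root mean square of successive R-R interval differences, in ms."""
    rr_ms = np.diff(np.asarray(r_peak_times_s)) * 1000.0
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

baseline_peaks = [0.00, 0.82, 1.63, 2.47, 3.29, 4.12]   # landscape video
task_peaks     = [0.00, 0.78, 1.60, 2.38, 3.20, 3.97]   # children's faces
delta = rmssd(task_peaks) - rmssd(baseline_peaks)        # >0 increase, <0 decrease
print(f"HRV change from baseline: {delta:+.1f} ms")
```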


Subject(s)
Emotions, Facial Expression, Heart Rate, Mothers, Humans, Female, Adult, Heart Rate/physiology, Child, Emotions/physiology, Mothers/psychology, Emotional Abuse/psychology, Male, Electrocardiography, Child Abuse/psychology, Mother-Child Relations/psychology, Surveys and Questionnaires
4.
J Psychiatry Neurosci ; 49(3): E145-E156, 2024.
Article in English | MEDLINE | ID: mdl-38692692

ABSTRACT

BACKGROUND: Neuroimaging studies have revealed abnormal functional interaction during the processing of emotional faces in patients with major depressive disorder (MDD), thereby enhancing our comprehension of the pathophysiology of MDD. However, it is unclear whether there is abnormal directional interaction among face-processing systems in patients with MDD. METHODS: A group of patients with MDD and a healthy control group underwent a face-matching task during functional magnetic resonance imaging. Dynamic causal modelling (DCM) was used to investigate effective connectivity between 7 regions of the face-processing systems. We used a Parametric Empirical Bayes model to compare effective connectivity between patients with MDD and controls. RESULTS: We included 48 patients and 44 healthy controls in our analyses. Both groups showed higher accuracy and faster reaction times in the shape-matching condition than in the face-matching condition. However, no significant behavioural or brain-activation differences were found between the groups. Using DCM, we found that, compared with controls, patients with MDD showed decreased self-connection in the right dorsolateral prefrontal cortex (DLPFC), amygdala, and fusiform face area (FFA) across task conditions; increased intrinsic connectivity from the right amygdala to the bilateral DLPFC, right FFA, and left amygdala, suggesting increased intrinsic connectivity centred on the amygdala in the right side of the face-processing systems; both increased and decreased positive intrinsic connectivity in the left side of the face-processing systems; and a comparable task-modulation effect on connectivity. LIMITATIONS: Our study did not include longitudinal neuroimaging data, and region-of-interest selection in the DCM analysis was limited. CONCLUSION: Our findings provide evidence for a complex pattern of alterations in the face-processing systems in patients with MDD, potentially involving the right amygdala to a greater extent. The results confirm some previous findings and highlight the crucial role of regions on both sides of the face-processing systems in the pathophysiology of MDD.


Subject(s)
Amygdala, Depressive Disorder, Major, Facial Recognition, Magnetic Resonance Imaging, Humans, Depressive Disorder, Major/physiopathology, Depressive Disorder, Major/diagnostic imaging, Male, Female, Adult, Facial Recognition/physiology, Amygdala/diagnostic imaging, Amygdala/physiopathology, Brain/diagnostic imaging, Brain/physiopathology, Neural Pathways/physiopathology, Neural Pathways/diagnostic imaging, Bayes Theorem, Young Adult, Brain Mapping, Facial Expression, Middle Aged, Reaction Time/physiology
5.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715407

ABSTRACT

Facial palsy can result in a serious complication known as facial synkinesis, which causes both physical and psychological harm to patients. There is growing evidence that patients with facial synkinesis have brain abnormalities, but the brain mechanisms and underlying imaging biomarkers remain unclear. Here, we employed functional magnetic resonance imaging (fMRI) to investigate brain function in 31 patients with unilateral post-facial-palsy synkinesis and 25 healthy controls during different facial expression movements and at rest. Combining surface-based mass-univariate analysis and multivariate pattern analysis, we identified diffuse activation and intrinsic connection patterns in the primary motor cortex and the somatosensory cortex on the patients' affected side. Further, we distinguished post-facial-palsy synkinesis patients from healthy subjects with favorable accuracy using a support vector machine based on both task-related and resting-state functional magnetic resonance imaging data. Together, these findings indicate the potential of the identified functional reorganizations to serve as neuroimaging biomarkers for facial synkinesis diagnosis.
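As a sketch of the classification step only (the surface-based feature extraction is not reproduced here), a linear support vector machine with cross-validation on synthetic patient/control feature vectors; the group sizes mirror the abstract, everything else is an assumption.

```python
# Hypothetical sketch: patients vs. controls classification with a linear SVM
# and 5-fold cross-validation. Feature vectors are synthetic stand-ins for the
# task-related and resting-state fMRI features described in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_patients = rng.normal(0.3, 1.0, size=(31, 200))   # 31 patients
X_controls = rng.normal(0.0, 1.0, size=(25, 200))   # 25 healthy controls
X = np.vstack([X_patients, X_controls])
y = np.array([1] * 31 + [0] * 25)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```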


Subject(s)
Facial Paralysis, Magnetic Resonance Imaging, Synkinesis, Humans, Magnetic Resonance Imaging/methods, Facial Paralysis/physiopathology, Facial Paralysis/diagnostic imaging, Facial Paralysis/complications, Male, Female, Synkinesis/physiopathology, Adult, Middle Aged, Young Adult, Facial Expression, Biomarkers, Motor Cortex/physiopathology, Motor Cortex/diagnostic imaging, Brain Mapping, Somatosensory Cortex/diagnostic imaging, Somatosensory Cortex/physiopathology, Brain/diagnostic imaging, Brain/physiopathology, Support Vector Machine
6.
Sci Rep ; 14(1): 10371, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38710806

ABSTRACT

Emotion is a human faculty that can influence an individual's quality of life in both positive and negative ways. The ability to distinguish different types of emotion can help researchers estimate the current state of patients or the probability of future disease. Recognizing emotions from images is problematic because people can conceal their feelings by modifying their facial expressions. This has led researchers to consider electroencephalography (EEG) signals for more accurate emotion detection. However, the complexity of EEG recordings and of data analysis using conventional machine learning algorithms has caused inconsistent emotion recognition. Therefore, utilizing hybrid deep learning models and other techniques has become common due to their ability to analyze complicated data and achieve higher performance by integrating diverse features of the models. At the same time, researchers prioritize models with fewer parameters that still achieve the highest average accuracy. This study improves the Convolutional Fuzzy Neural Network (CFNN) for emotion recognition using EEG signals to achieve a reliable detection system. Initially, the pre-processing and feature extraction phases are implemented to obtain noiseless and informative data. Then, the CFNN with a modified architecture is trained to classify emotions. Several parametric and comparative experiments were performed. The proposed model achieved reliable performance for emotion recognition, with an average accuracy of 98.21% and 98.08% for valence (pleasantness) and arousal (intensity), respectively, and outperformed state-of-the-art methods.
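The paper's modified CFNN architecture is not given in this abstract, so the sketch below shows only a plain 1-D convolutional baseline for windowed EEG classification in PyTorch; the fuzzy component is omitted, and all layer sizes, channel counts, and window lengths are assumptions.

```python
# Hypothetical sketch: a plain 1-D CNN baseline for classifying EEG windows
# into low/high valence (or arousal). NOT the paper's CFNN: the fuzzy layer
# is omitted, and all shapes are illustrative assumptions.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                         # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = EEGConvNet()
dummy = torch.randn(8, 32, 512)                   # 8 windows, 32 electrodes, 512 samples
print(model(dummy).shape)                         # torch.Size([8, 2])
```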


Subject(s)
Electroencephalography, Emotions, Fuzzy Logic, Neural Networks, Computer, Humans, Electroencephalography/methods, Emotions/physiology, Male, Female, Adult, Algorithms, Young Adult, Signal Processing, Computer-Assisted, Deep Learning, Facial Expression
7.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38566513

ABSTRACT

The perception of facial expression plays a crucial role in social communication, and it is known to be influenced by various facial cues. Previous studies have reported both positive and negative biases toward overweight individuals. It is unclear whether facial cues, such as facial weight, bias facial expression perception. Combining psychophysics and event-related potential technology, the current study adopted a cross-adaptation paradigm to examine this issue. The psychophysical results of Experiments 1A and 1B revealed a bidirectional cross-adaptation effect between overweight and angry faces. Adapting to overweight faces decreased the likelihood of perceiving ambiguous emotional expressions as angry compared to adapting to normal-weight faces. Likewise, exposure to angry faces subsequently caused normal-weight faces to appear thinner. These findings were corroborated by bidirectional event-related potential results, showing that adaptation to overweight faces relative to normal-weight faces modulated the event-related potential responses to emotionally ambiguous facial expressions (Experiment 2A); vice versa, adaptation to angry faces relative to neutral faces modulated the event-related potential responses to faces ambiguous in facial weight (Experiment 2B). Our study provides direct evidence associating overweight faces with facial expression, suggesting at least partly common neural substrates for the perception of overweight and angry faces.


Subject(s)
Facial Expression, Weight Prejudice, Humans, Overweight, Anger/physiology, Evoked Potentials/physiology, Emotions/physiology
8.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654234

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study aims to investigate the emotion-processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into depressive-mood and non-depressive-mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. The patients were shown facial images displaying positive, neutral, and negative emotions and asked to identify the emotion category while their visual evoked potentials were simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, or years of education, but showed significant differences in slow-wave sleep ratio (P = 0.039), ESS (P = 0.006), MMSE (P < 0.001), and MoCA scores (P = 0.043). No significant difference was found in accuracy or response time on emotional face recognition between the two groups. N170 latency in the depressive group was significantly longer than in the non-depressive group at the bilateral parieto-occipital lobe (P = 0.014 and 0.007), while no significant difference in N170 amplitude was found. There was no significant difference in P300 amplitude or latency between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MoCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit increased N170 latency and impaired facial emotion recognition ability. Special attention to depressive mood among OSAHS patients is warranted, given its implications for patient care.
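For readers unfamiliar with the measure, N170 latency is simply the time of the most negative ERP deflection in a window around 130-200 ms after stimulus onset. A minimal sketch on a synthetic averaged ERP follows; the window bounds and sampling rate are assumptions, not the study's parameters.

```python
# Hypothetical sketch: N170 peak latency from an averaged ERP at a
# parieto-occipital electrode (e.g. PO7). Window and sampling rate assumed.
import numpy as np

def n170_latency_ms(erp_uv, sfreq, tmin_s=-0.2, window=(0.13, 0.20)):
    """Latency (ms) of the most negative deflection in the N170 window."""
    times = np.arange(len(erp_uv)) / sfreq + tmin_s
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(erp_uv[mask])] * 1000.0  # N170 is negative

sfreq = 500.0
times = np.arange(-0.2, 0.6, 1 / sfreq)
erp = -3.0 * np.exp(-((times - 0.165) ** 2) / (2 * 0.012 ** 2))  # toy peak at 165 ms
print(f"N170 latency: {n170_latency_ms(erp, sfreq):.0f} ms")
```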


Subject(s)
Depression, Emotions, Sleep Apnea, Obstructive, Humans, Male, Middle Aged, Sleep Apnea, Obstructive/physiopathology, Sleep Apnea, Obstructive/psychology, Sleep Apnea, Obstructive/complications, Depression/physiopathology, Depression/psychology, Depression/complications, Female, Adult, Emotions/physiology, Polysomnography, Evoked Potentials/physiology, Electroencephalography, Facial Recognition/physiology, Evoked Potentials, Visual/physiology, Facial Expression
9.
Hum Brain Mapp ; 45(5): e26673, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38590248

ABSTRACT

The amygdala is important for human fear processing. However, recent research has failed to reveal specificity, with evidence that the amygdala also responds to other emotions. A more nuanced understanding of the amygdala's role in emotion processing, particularly relating to fear, is needed given the importance of effective emotional functioning for everyday function and mental health. We studied 86 healthy participants (44 females), aged 18-49 (mean 26.12 ± 6.6) years, who underwent multiband functional magnetic resonance imaging. We specifically examined the reactivity of four amygdala subregions (using region-of-interest analysis) and related brain connectivity networks (using generalized psychophysiological interaction) to fearful, angry, and happy facial stimuli in an emotional face-matching task. All amygdala subregions responded to all stimuli (p-FDR < .05), with this reactivity strongly driven by the superficial and centromedial amygdala (p-FDR < .001). Yet amygdala subregions showed selectively strong functional connectivity with other occipitotemporal and inferior frontal brain regions, with particular sensitivity to fear recognition, strongly driven by the basolateral amygdala (p-FDR < .05). These findings suggest that amygdala specialization for fear may be reflected not in its local activity but in its connectivity with other brain regions within a specific face-processing network.


Subject(s)
Brain, Emotions, Female, Humans, Emotions/physiology, Fear/psychology, Amygdala/physiology, Happiness, Brain Mapping/methods, Magnetic Resonance Imaging, Facial Expression
10.
PLoS One ; 19(4): e0301896, 2024.
Article in English | MEDLINE | ID: mdl-38598520

ABSTRACT

This study investigates whether humans recognize different emotions conveyed only by the kinematics of a single moving geometrical shape, and how this competence unfolds during development, from childhood to adulthood. To this aim, animations in which a shape moved according to happy, fearful, or neutral cartoons were shown, in a forced-choice paradigm, to 7- and 10-year-old children and to adults. Accuracy and response times were recorded, and the movement of the mouse while participants selected a response was tracked. Results showed that 10-year-old children and adults recognize happiness and fear when conveyed solely by different kinematics, with an advantage for fearful stimuli. Fearful stimuli were also accurately identified by 7-year-olds, together with neutral stimuli, while at this age accuracy for happiness was not significantly different from chance. Overall, the results demonstrate that emotions can be identified from single-point motion alone during both childhood and adulthood. Moreover, motion contributes in varying measure to the comprehension of emotions, with fear recognized earlier in development and more readily even later on, when all emotions are accurately labeled.
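The chance-level comparison mentioned above can be illustrated with a simple binomial test against the 1/3 guessing rate of a three-alternative forced choice; the trial counts below are invented for illustration, and the abstract does not report which statistic the authors used.

```python
# Hypothetical sketch: testing forced-choice accuracy against chance (1/3 in
# a three-alternative task). Counts are invented; requires SciPy >= 1.7.
from scipy.stats import binomtest

n_trials, n_correct = 60, 27                 # e.g. one age group, happy stimuli
result = binomtest(n_correct, n_trials, p=1 / 3, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.3f}")
```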


Subject(s)
Emotions, Facial Expression, Adult, Child, Humans, Biomechanical Phenomena, Emotions/physiology, Fear, Happiness
11.
PLoS One ; 19(4): e0290590, 2024.
Article in English | MEDLINE | ID: mdl-38635525

ABSTRACT

Spontaneous smiles in response to politicians can serve as an implicit barometer for gauging electorate preferences. However, it is unclear whether a subtle Duchenne smile, an authentic expression involving coactivation of the zygomaticus major (ZM) and orbicularis oculi (OO) muscles, would be elicited while reading about a favored politician smiling, indicating a more positive disposition and political endorsement. From an embodied-simulation perspective, we investigated whether written descriptions of a politician's smile would trigger morphologically different smiles in readers depending on shared or opposing political orientation. In a controlled reading task in the laboratory, participants were presented with subject-verb phrases describing left- and right-wing politicians smiling or frowning. Concurrently, their facial muscular reactions were measured via electromyography (EMG) at three facial muscles: the ZM and OO, coactive during Duchenne smiles, and the corrugator supercilii (CS), involved in frowning. We found that participants responded with a Duchenne smile, detected at the ZM and OO muscles, when exposed to portrayals of smiling politicians of the same political orientation, and reported more positive emotions towards the latter. In contrast, when reading about outgroup politicians smiling, there was weaker activation of the ZM muscle and no activation of the OO muscle, suggesting a weak non-Duchenne smile, while emotions reported towards outgroup politicians were significantly more negative. Also, an enhanced frown response in the CS was found for ingroup compared to outgroup politicians' frown expressions. The present findings suggest that a politician's smile may go a long way toward influencing electorates through both non-verbal and verbal pathways. They add another layer to our understanding of how language and social information shape embodied effects in a highly nuanced manner. Implications for verbal communication in the political context are discussed.
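A typical facial-EMG quantification chain is band-pass filtering, full-wave rectification, and low-pass smoothing to obtain an amplitude envelope per muscle site (ZM, OO, CS). The sketch below follows that standard recipe with assumed filter settings; it is not the authors' processing pipeline.

```python
# Hypothetical sketch: a standard facial-EMG envelope (band-pass, rectify,
# low-pass smooth) for a muscle site such as ZM, OO, or CS. Filter settings
# are assumptions, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, sfreq, band=(20.0, 400.0), smooth_hz=4.0):
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    rectified = np.abs(filtfilt(b, a, raw))          # band-pass, then rectify
    b, a = butter(4, smooth_hz / nyq, btype="low")   # smooth into an envelope
    return filtfilt(b, a, rectified)

sfreq = 1000.0
t = np.arange(0, 2, 1 / sfreq)
raw = np.random.randn(t.size) * (1 + (t > 1))        # toy activation after t = 1 s
env = emg_envelope(raw, sfreq)
print(env[:1000].mean(), env[1000:].mean())          # envelope rises with the burst
```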


Subject(s)
Frailty, Smiling, Humans, Smiling/physiology, Reading, Facial Expression, Emotions/physiology, Facial Muscles/physiology, Eyelids
12.
Sci Rep ; 14(1): 9546, 2024 04 25.
Article in English | MEDLINE | ID: mdl-38664496

ABSTRACT

The aim of the current study was to investigate the influence of both intra- and interspecific audiences on dogs' facial expressions and behaviours. Forty-six dogs were exposed to three test conditions in which a food reward, initially available, was denied when in the presence of either a human (Human condition) or a dog audience (Dog condition), or in the absence of a visible audience (Non-social condition). Salivary cortisol was collected to evaluate stress/arousal activation in the different conditions. Compared to the Non-social condition, the presence of a conspecific evoked more facial expressions, coded according to DogFACS (the Dog Facial Action Coding System, an anatomically based tool for analyzing facial expressions in domestic dogs) (EAD105-Ears downward), more displacement behaviours (AD137-Nose licking, AD37-Lip wiping), tail wagging, whining, and panting (AD126). When facing a conspecific, dogs assumed a more avoidant attitude, keeping their distance and not looking at the stimuli, compared to when in the presence of the human partner. Dogs also exhibited more facial expressions (EAD102-Ears Adductor, EAD104-Ears Rotator), displacement behaviours (AD137-Nose licking, AD37-Lip wiping), panting (AD126), and whining when facing the conspecific than the human partner. Post-test cortisol was not influenced by any condition, and no association between pre-test cortisol and behavioural variables was found; thus, strong differences in levels of stress/arousal were unlikely to be responsible for the behavioural differences between conditions. Considering the current results in the context of the available literature, we suggest that the higher rate of displacement behaviours exhibited with conspecifics was likely due to an increased level of uncertainty regarding these situations.


Subject(s)
Behavior, Animal, Facial Expression, Hydrocortisone, Animals, Dogs, Behavior, Animal/physiology, Male, Hydrocortisone/metabolism, Hydrocortisone/analysis, Female, Humans, Saliva/metabolism, Saliva/chemistry, Stress, Psychological, Social Behavior
13.
Sensors (Basel) ; 24(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676067

ABSTRACT

Facial expression is an important way to convey human emotions, and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measurement method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The formation processes of six basic facial expressions in experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain field, offers particular advantages in characterizing facial expressions due to its localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two characteristic regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolution of the six basic expressions is discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy may have potential value in objectively characterizing human expressions based on quantitative measurement.
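To make the displacement-to-strain step explicit: strain components are spatial gradients of the measured displacement fields. A minimal small-strain sketch with a synthetic displacement field standing in for real DIC output (the grid spacing and field values are assumptions):

```python
# Hypothetical sketch: small-strain components from a full-field displacement
# map via numerical gradients, i.e. the "gradient of the displacement" the
# paper uses. Synthetic field; real 3D-DIC output would replace u and v.
import numpy as np

ny, nx, pitch_mm = 100, 100, 0.1
y, x = np.mgrid[0:ny, 0:nx] * pitch_mm
u = 0.002 * x                                 # horizontal displacement (mm)
v = -0.001 * y                                # vertical displacement (mm)

du_dy, du_dx = np.gradient(u, pitch_mm)       # gradients along rows, columns
dv_dy, dv_dx = np.gradient(v, pitch_mm)
exx = du_dx                                   # normal strain in x (~0.002)
eyy = dv_dy                                   # normal strain in y (~-0.001)
exy = 0.5 * (du_dy + dv_dx)                   # shear strain (~0)
print(exx.mean(), eyy.mean(), exy.mean())
```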


Subject(s)
Facial Expression, Imaging, Three-Dimensional, Humans, Imaging, Three-Dimensional/methods, Face/physiology, Emotions/physiology, Algorithms, Image Processing, Computer-Assisted/methods
14.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676235

ABSTRACT

Most human emotion recognition methods depend largely on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead correspond to communicative intentions. In other cases, emotions are hidden, cannot be expressed, or may have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely, without relying on stereotypical facial expressions or contact-based methods, using short facial video data. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements, which include better remote heart rate estimation via preliminary skin segmentation, improvement of the heartbeat peak-and-trough detection process, and better emotion classification accuracy achieved by employing an appropriate deep-learning classifier using RGB camera input only. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos that elicited the following five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution, physiologically affected spatiotemporal features and examined them as input features with different deep-learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
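The heartbeat peak-detection step can be illustrated with a generic peak search on a remotely sensed pulse waveform (for example, the mean intensity of segmented skin pixels over time); the signal, frame rate, and thresholds below are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: heart-rate estimation from a remote pulse signal by
# heartbeat peak detection. Synthetic signal; parameters are assumptions.
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                                    # camera frame rate
t = np.arange(0, 20, 1 / fps)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # ~72 bpm

# A refractory distance (here capping at 180 bpm) and a minimum prominence
# reject spurious noise peaks.
peaks, _ = find_peaks(pulse, distance=fps / 3.0, prominence=0.5)
hr_bpm = 60.0 / np.mean(np.diff(peaks) / fps)
print(f"estimated heart rate: {hr_bpm:.0f} bpm")
```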


Subject(s)
Deep Learning, Emotions, Facial Expression, Heart Rate, Humans, Emotions/physiology, Heart Rate/physiology, Video Recording/methods, Image Processing, Computer-Assisted/methods, Face/physiology, Female, Male
15.
Cogn Emot ; 38(3): 296-314, 2024 May.
Article in English | MEDLINE | ID: mdl-38678446

ABSTRACT

Social exclusion is an emotionally painful experience that leads to various alterations in socio-emotional processing. The perceptual and emotional consequences of experiencing social exclusion can vary depending on the paradigm used to manipulate it. Exclusion paradigms vary in the severity and duration of the resulting exclusion experience, classifying it as either a short-term or a long-term experience. The present study aimed to examine the impact of exclusion on socio-emotional processing using different paradigms, one in which participants experienced short-term exclusion and one in which they imagined long-term exclusion. Ambiguous facial emotions were used as socio-emotional cues. In Study 1, the Ostracism Online paradigm was used to manipulate short-term exclusion. In Study 2, a new sample of participants imagined long-term exclusion through the future-life-alone paradigm. Participants in both studies then completed a facial emotion recognition task consisting of morphed, ambiguous facial emotions. By means of Point of Subjective Equivalence (PSE) analyses, our results indicate that the experience of short-term exclusion hinders recognition of happy facial expressions. In contrast, imagining long-term exclusion causes difficulties in recognising sad facial expressions. These findings extend the current literature, suggesting that not all social exclusion paradigms affect socio-emotional processing similarly.
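A Point of Subjective Equivalence is commonly estimated by fitting a logistic psychometric function to the proportion of one response across a morph continuum and reading off its midpoint; the sketch below does exactly that on invented response proportions (not the study's data).

```python
# Hypothetical sketch: estimating the PSE by fitting a logistic psychometric
# function to the proportion of "happy" responses along a morph continuum.
# Response proportions are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph = np.array([0, 20, 40, 50, 60, 80, 100])          # % happy in the morph
p_happy = np.array([0.02, 0.10, 0.35, 0.55, 0.75, 0.93, 0.99])

(pse, slope), _ = curve_fit(logistic, morph, p_happy, p0=[50.0, 0.1])
print(f"PSE = {pse:.1f}% morph level")                   # PSE shifts index the bias
```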


Subject(s)
Emotions, Facial Expression, Humans, Female, Male, Young Adult, Adult, Facial Recognition, Psychological Distance, Social Isolation/psychology, Recognition, Psychology, Adolescent
16.
Sci Rep ; 14(1): 9794, 2024 04 29.
Article in English | MEDLINE | ID: mdl-38684721

ABSTRACT

Face perception is a major topic in vision research. Most previous research has concentrated on (holistic) spatial representations of faces, often with static faces as stimuli. However, faces are highly dynamic stimuli containing important temporal information. How sensitive humans are to temporal information in dynamic faces is not well understood. Studies investigating temporal information in dynamic faces usually focus on the processing of emotional expressions, yet faces also contain relevant temporal information without any strong emotional expression. To investigate cues that modulate human sensitivity to temporal order, we utilized muted dynamic neutral-face videos in two experiments. We varied the orientation of the faces (upright and inverted) and the presence/absence of eye blinks as partial dynamic cues. Participants viewed short, muted, monochrome videos of models vocalizing a widely known text (National Anthem). Videos were played either forward (in the correct temporal order) or backward. Participants were asked to determine the direction of the temporal order for each video and (at the end of the experiment) whether they had understood the speech. We found that face orientation and the presence/absence of an eye blink affected sensitivity, criterion (bias), and reaction time: overall, sensitivity was higher for upright compared to inverted faces, and in the condition where an eye blink was present compared to the condition without one. Reaction times were mostly faster in the conditions with higher sensitivity. A bias to report inverted faces as 'backward', observed in Experiment I, where upright and inverted faces were presented randomly interleaved within each block, was absent when upright and inverted faces were presented in separate blocks in Experiment II. Language comprehension results revealed higher sensitivity when participants understood the speech than when they did not, in both experiments. Taken together, our results show higher sensitivity for upright compared to inverted faces, suggesting that the perception of dynamic, task-relevant information is superior with the canonical orientation of the faces. Furthermore, partial information coming from eye blinks, in addition to mouth movements, seems to play a significant role in dynamic face perception, both when faces are presented upright and when inverted. We suggest that studying the perception of facial dynamics beyond emotional expressions will help us better understand the mechanisms underlying the temporal integration of facial information from different (partial and holistic) sources, and our results show how human observers employ different strategies, depending on the available information, when judging the temporal order of faces.
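Sensitivity and criterion in such a forward/backward judgement are standard signal-detection quantities; a minimal sketch with a log-linear correction and invented trial counts (treating forward-played videos as signal trials) is given below.

```python
# Hypothetical sketch: d' (sensitivity) and c (criterion/bias) for the
# forward/backward temporal-order judgement, with forward videos as "signal".
# Trial counts are invented; the log-linear correction avoids infinite z-scores.
from scipy.stats import norm

def dprime_criterion(hits, misses, fas, crs):
    hr = (hits + 0.5) / (hits + misses + 1.0)     # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)         # corrected false-alarm rate
    d = norm.ppf(hr) - norm.ppf(far)
    c = -0.5 * (norm.ppf(hr) + norm.ppf(far))     # c > 0: bias toward "backward"
    return d, c

d, c = dprime_criterion(hits=42, misses=18, fas=15, crs=45)
print(f"d' = {d:.2f}, criterion = {c:.2f}")
```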


Subject(s)
Facial Recognition, Humans, Female, Male, Facial Recognition/physiology, Adult, Young Adult, Reaction Time/physiology, Facial Expression, Blinking/physiology, Photic Stimulation/methods, Emotions/physiology, Face/physiology, Cues
17.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 389-397, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686422

ABSTRACT

Emotion recognition refers to the process of determining and identifying an individual's current emotional state by analyzing various signals such as voice, facial expressions, and physiological indicators. Using electroencephalogram (EEG) signals and virtual reality (VR) technology for emotion recognition research helps to better understand human emotional changes, enabling applications in areas such as psychological therapy, education, and training to enhance people's quality of life. However, there is a lack of comprehensive review literature summarizing combined research on EEG signals and VR environments for emotion recognition. Therefore, this paper summarizes and synthesizes relevant research from the past five years. First, it introduces the relevant theories of VR and of EEG-based emotion recognition. Second, it focuses on the analysis of emotion induction, feature extraction, and classification methods in emotion recognition using EEG signals within VR environments. The article concludes by summarizing the field's application directions and providing an outlook on future development trends, aiming to serve as a reference for researchers in related fields.


Subject(s)
Electroencephalography, Emotions, Virtual Reality, Humans, Emotions/physiology, Facial Expression
18.
Zhejiang Da Xue Xue Bao Yi Xue Ban ; 53(2): 254-260, 2024 Apr 25.
Article in English, Chinese | MEDLINE | ID: mdl-38650447

ABSTRACT

Attention deficit and hyperactivity disorder (ADHD) is a chronic neurodevelopmental disorder characterized by inattention, hyperactivity-impulsivity, and working-memory deficits. Social dysfunction is one of the major challenges faced by children with ADHD. It has been found that children with ADHD cannot perform as well as typically developing children on facial expression recognition (FER) tasks. Generally, children with ADHD have some difficulties in FER, although some studies suggest they show no significant differences in the accuracy of specific emotion recognition compared with typically developing children. The neuropsychological mechanisms underlying these difficulties are as follows. First, neuroanatomically: compared to typically developing children, children with ADHD show smaller gray matter volume and surface area in the amygdala and medial prefrontal cortex regions, as well as reduced density and volume of axons/cells in certain frontal white-matter fiber tracts. Second, neurophysiologically: children with ADHD exhibit increased slow-wave activity in their electroencephalogram, and event-related potential studies reveal abnormalities in emotional regulation and in responses to angry faces when facing facial stimuli. Third, psychologically: psychosocial stressors may influence FER abilities in children with ADHD, and sleep deprivation in children with ADHD may significantly increase their recognition threshold for negative expressions such as sadness and anger. This article reviews research progress over the past three years on the FER abilities of children with ADHD, analyzing the FER deficit in children with ADHD from three dimensions: neuroanatomy, neurophysiology, and psychology, aiming to provide new perspectives for further research and clinical treatment of ADHD.


Subject(s)
Attention Deficit Disorder with Hyperactivity, Facial Expression, Humans, Attention Deficit Disorder with Hyperactivity/physiopathology, Attention Deficit Disorder with Hyperactivity/psychology, Child, Facial Recognition/physiology, Emotions
19.
J Pers Soc Psychol ; 126(3): 390-412, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38647440

ABSTRACT

There is abundant evidence that emotion categorization is influenced by the social category membership of target faces, with target sex and target race modulating the ease with which perceivers can categorize happy and angry emotional expressions. However, theoretical interpretation of these findings is constrained by gender and race imbalances in both the participant samples and target faces typically used when demonstrating these effects (e.g., most participants have been White women and most Black targets have been men). Across seven experiments, the current research used gender-matched samples (Experiments 1a and 1b), gender- and racial identity-matched samples (Experiments 2a and 2b), and manipulations of social context (Experiments 3a, 3b, and 4) to establish whether emotion categorization is influenced by interactions between the social category membership of perceivers and target faces. Supporting this idea, we found the presence and size of the happy face advantage were influenced by interactions between perceivers and target social categories, with reliable happy face advantages in reaction times for ingroup targets but not necessarily for outgroup targets. White targets and female targets were the only categories associated with a reliable happy face advantage that was independent of perceiver category. The interactions between perceiver and target social category were eliminated when targets were blocked by social category (e.g., a block of all White female targets; Experiments 3a and 3b) and accentuated when targets were associated with additional category information (i.e., ingroup/outgroup nationality; Experiment 4). These findings support the possibility that contextually sensitive intergroup processes influence emotion categorization. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Emotions, Facial Expression, Facial Recognition, Group Processes, Happiness, Social Perception, Humans, Female, Male, Adult, Young Adult, Social Identification
20.
J Exp Psychol Gen ; 153(5): 1374-1387, 2024 May.
Article in English | MEDLINE | ID: mdl-38647481

ABSTRACT

A subcortical pathway is thought to have evolved to facilitate fear information transmission, but direct evidence for its existence in humans is lacking. In recent years, rapid, preattentive, and preconscious fear processing has been demonstrated, providing indirect support for the existence of the subcortical pathway by challenging the necessity of canonical cortical pathways in fear processing. However, direct support also requires evidence for the involvement of subcortical regions in fear processing. To address this issue, here we investigate whether fear processing reflects the characteristics of the subcortical structures in the hypothesized subcortical pathway. Using a monocular/dichoptic paradigm, Experiment 1 demonstrated a same-eye advantage for fearful but not neutral face processing, suggesting that fear processing relied on monocular neurons existing mainly in the subcortex. Experiments 2 and 3 further showed insensitivity to short-wavelength stimuli and a nasal-temporal hemifield asymmetry in fear processing, both of which were functional characteristics of the superior colliculus, a key hub of the subcortical pathway. Furthermore, all three experiments revealed a low spatial frequency selectivity of fear processing, consistent with magnocellular input via subcortical neurons. These results suggest a selective involvement of subcortical structures in fear processing, which, together with the indirect evidence for automatic fear processing, provides a more complete picture of the existence of a subcortical pathway for fear processing in humans. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Facial Expression, Facial Recognition, Fear, Humans, Fear/physiology, Male, Female, Adult, Young Adult, Facial Recognition/physiology, Superior Colliculi/physiology